Neuro-Symbolic Reasoning


Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning

Neural Information Processing Systems

Human reasoning can be understood as an interplay between two systems: the intuitive and associative (System 1) and the deliberative and logical (System 2). Neural sequence models---which have been increasingly successful at performing complex, structured tasks---exhibit the advantages and failure modes of System 1: they are fast and learn patterns from data, but are often inconsistent and incoherent. In this work, we seek a lightweight, training-free means of improving existing System 1-like sequence models by adding System 2-inspired logical reasoning. We explore several variations on this theme in which candidate generations from a neural sequence model are examined for logical consistency by a symbolic reasoning module, which can either accept or reject the generations. Our approach uses neural inference to mediate between the neural System 1 and the logical System 2. Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
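The generate-then-check loop described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the candidate generator stands in for a neural sequence model (System 1), and the consistency checker stands in for the symbolic reasoning module (System 2); the toy "door" world state and all function names are assumptions made for the example.

```python
def system1_candidates(state):
    """Stand-in for a neural sequence model: propose candidate next events."""
    return [
        {"agent": "Anna", "action": "open", "object": "door"},
        {"agent": "Anna", "action": "close", "object": "door"},
        {"agent": "Anna", "action": "walk_through", "object": "door"},
    ]

def system2_consistent(state, event):
    """Symbolic checker: reject events that contradict the world state."""
    door_open = state.get("door_open", False)
    if event["action"] == "open":
        return not door_open   # can't open an already-open door
    if event["action"] == "close":
        return door_open       # can't close an already-closed door
    if event["action"] == "walk_through":
        return door_open       # door must be open to walk through
    return True

def generate_step(state):
    """Accept the first System 1 candidate that passes the System 2 check."""
    for event in system1_candidates(state):
        if system2_consistent(state, event):
            return event
    return None  # all candidates rejected; a real system might resample

print(generate_step({"door_open": False}))  # only "open" is consistent here
```

With the door closed, only the "open" candidate survives the check; with it open, "open" is rejected and "close" is accepted instead. In the paper's setting the checker examines logical consistency of story or instruction-following generations rather than a hand-coded world state.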




AI Weekly: AI prosecutors and pong-playing neurons closed out 2021

#artificialintelligence

In the week that drew 2021 to a close, the tech news cycle died down, as it typically does. Even an industry as fast-paced as AI sometimes needs a reprieve, especially as a new COVID-19 variant upends plans and major conferences. But that isn't to say late December wasn't eventful. One of the most talked-about stories came from the South China Morning Post (SCMP), which described an "AI prosecutor" developed by Chinese researchers that can reportedly identify crimes and press charges "with 97% accuracy." The system, which was trained on 1,000 "traits" sourced from 17,000 real-life cases of crimes committed between 2015 and 2020, such as gambling, reckless driving, theft, and fraud, recommends sentences given a brief text description.


An energy-based model for neuro-symbolic reasoning on knowledge graphs

Dold, Dominik; Garrido, Josep Soler

arXiv.org Artificial Intelligence

Multi-relational knowledge graphs (KGs) [1] are rich data structures used to model a variety of systems like industrial projects [2] and mathematical proofs [3]. It is therefore not surprising that the interest in machine learning algorithms capable of dealing with graph-structured data has increased lately [4]. This broad applicability of graphs becomes apparent when summarizing them as lists of triple statements (node, edge, node), e.g., (M.Hamill, plays, L.Skywalker) and (L.Skywalker, appearsIn, StarWars) - with individual entries being called subject, predicate and object.

Data generated this way are incredibly sparse, i.e., only a tiny fraction of possible triples are observed or even valid, as well as streaming in nature such that triples can appear multiple times and underlie stochastic variations. Using graph embedding, we reformulate the anomaly detection task as a link prediction task: events in the automation system are equivalent to new edges appearing in its graph representation that can be evaluated using the learned embeddings. However, we found that standard graph embedding algorithms perform poorly on such industrial graphs, mainly because they expect …
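The triple representation and the embedding-based link scoring mentioned above can be illustrated with a small sketch. This is not the paper's energy-based model; as an assumption for the example it uses a TransE-style score (a common translational embedding model, score = -||e_s + e_p - e_o||) over randomly initialized, untrained embeddings, just to show how triples map to a plausibility score for link prediction.

```python
import numpy as np

# Toy KG as (subject, predicate, object) triples, following the example above.
triples = [
    ("M.Hamill", "plays", "L.Skywalker"),
    ("L.Skywalker", "appearsIn", "StarWars"),
]

# Build random embeddings for every entity and relation (untrained;
# a real system would fit these by minimizing a ranking or energy loss).
rng = np.random.default_rng(0)
dim = 8
entities = {e for s, _, o in triples for e in (s, o)}
relations = {p for _, p, _ in triples}
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {p: rng.normal(size=dim) for p in relations}

def score(s, p, o):
    """TransE-style plausibility: higher (closer to 0) = more plausible link."""
    return -np.linalg.norm(ent_emb[s] + rel_emb[p] - ent_emb[o])

# Link prediction: a candidate new edge is scored against the embeddings.
print(score("M.Hamill", "plays", "L.Skywalker"))
```

Anomaly detection then reduces to thresholding this score: an event whose corresponding edge scores far below those of observed triples is flagged as anomalous.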